
    Figures of merit and constraints from testing General Relativity using the latest cosmological data sets including refined COSMOS 3D weak lensing

    We use cosmological constraints from current data sets and a figure of merit (FoM) approach to probe any deviations from general relativity (GR) at cosmological scales. The FoM approach is used to study the constraining power of various combinations of data sets on modified gravity (MG) parameters. We use the recently refined HST-COSMOS weak-lensing tomography data, ISW-galaxy cross-correlations from the 2MASS and SDSS LRG surveys, the matter power spectrum from SDSS-DR7 (MPK), WMAP7 temperature and polarization spectra, BAO from 2DF and SDSS-DR7, and the Union2 compilation of supernovae, in addition to other bounds from H_0 measurements and BBN. We use three parametrizations of the MG parameters that enter the perturbed field equations. In order to allow for variations with redshift and scale, the first two parametrizations use recently suggested functional forms, while the third is based on binning methods. Using the first parametrization, we find that CMB+ISW+WL provides the strongest constraints on MG parameters, followed by CMB+WL or CMB+MPK+ISW. Using the second parametrization or binning methods, CMB+MPK+ISW consistently provides some of the strongest constraints, showing that the constraints are parametrization dependent. We find that combining current data sets does not consistently improve the uncertainties on MG parameters, due to tensions between the best-fit MG parameters preferred by different data sets. Furthermore, some functional forms imposed by the parametrizations can exacerbate these tensions. Next, unlike some studies that used the CFHTLS lensing data, we do not find any deviation from GR using the refined HST-COSMOS data, confirming the suggestion in those studies that their result may have been due to a systematic effect. Finally, we find in all cases that the values corresponding to GR are within the 95% confidence level contours for all data set combinations. (abridged) Comment: 18 pages, 6 figures, matches version published in PR
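
    For reference, the generalized figure of merit used in analyses of this kind is typically defined from the determinant of the parameter covariance matrix, so that it is inversely proportional to the volume of the confidence ellipsoid (a sketch of the standard definition; the paper's exact normalization may differ):

        \mathrm{FoM} = \frac{1}{\sqrt{\det \mathrm{Cov}(p_1, p_2, \ldots, p_n)}}

    A larger FoM thus corresponds to a smaller allowed region in the MG parameter space {p_i}, i.e. stronger constraints.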

    Probing Cosmic Acceleration Beyond the Equation of State: Distinguishing between Dark Energy and Modified Gravity Models

    If general relativity is the correct theory of physics on large scales, then there is a differential equation that relates the Hubble expansion function, inferred from measurements of angular diameter distance and luminosity distance, to the growth rate of large-scale structure. For a dark energy fluid without couplings or an unusual sound speed, deviations from this consistency relationship could be the signature of modified gravity on cosmological scales. We propose a procedure based on this consistency relation in order to distinguish between some dark energy models and modified gravity models. The procedure uses different combinations of cosmological observations and is able to find inconsistencies when they are present. As an example, we apply the procedure to a universe described by a recently proposed 5-dimensional modified gravity model. We show that this leads to an inconsistency within the dark energy parameter space that is detectable by future experiments. Comment: 8 pages, 7 figures; expanded paper; matches PRD accepted version; corrected growth rate formula; main results and conclusions unchanged
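
    In its simplest form, such a consistency test rests on the GR linear growth equation, which ties the growth of matter perturbations delta to the same expansion history H inferred from distances (a sketch of the standard relation, not necessarily the exact form used in the paper):

        \ddot{\delta} + 2H\dot{\delta} - 4\pi G \rho_m \delta = 0,
        \qquad f \equiv \frac{d\ln\delta}{d\ln a} \approx \Omega_m(a)^{\gamma},
        \quad \gamma \simeq 0.55 \;\text{in GR}

    A measured growth rate f(z) that cannot be reproduced by any dark energy model sharing the observed H(z) then signals a breakdown of the consistency relation.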

    Systematic mapping review on student’s performance analysis using big data predictive model

    This paper classifies the various existing predictive models that are used for monitoring and improving students' performance at schools and higher learning institutions. It analyses all the areas within the educational data mining methodology. Two databases were chosen for this study and a systematic mapping study was performed. Because this research area is still at a very early stage, only 114 articles published from 2012 to 2016 were identified. Of these, a total of 59 articles were reviewed and classified. There is increased interest and research in the area of educational data mining, particularly in improving students' performance with various predictive and prescriptive models. Most of the models are ultimately devised for pedagogical improvements. There is a notable scarcity of portable predictive models that fit into any educational environment, and more research is needed on educational big data. Keywords: predictive analysis; student's performance; big data; big data analytics; data mining; systematic mapping study

    Wavelet-based short-term load forecasting using optimized anfis

    This paper focuses on forecasting electric load consumption using an optimized Adaptive Neuro-Fuzzy Inference System (ANFIS). It employs Particle Swarm Optimization (PSO) to optimize the ANFIS, with the aim of improving its speed and accuracy. It determines the minimum error from the ANFIS error function and propagates it to the premise part. A wavelet transform was used to decompose the input variables using Daubechies 2 (db2), with the purpose of reducing outliers in the forecasting data as much as possible. The data were decomposed into one set of approximation coefficients and three sets of detail coefficients. The combined Wavelet-PSO-ANFIS model was tested using weather and load data from the Nova Scotia province. It was found that the model outperforms both a Genetic Algorithm (GA) optimized ANFIS and a traditional ANFIS optimized by gradient descent (GD). The Mean Absolute Percentage Error (MAPE) was used to measure the accuracy of the model; it gives a lower MAPE than the other two models and converges faster
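
    As a minimal sketch of the decomposition step (assuming the PyWavelets package and a synthetic load series; the paper does not specify its implementation), a level-3 db2 transform yields exactly one approximation array and three detail arrays:

        import numpy as np
        import pywt

        # Synthetic hourly load series standing in for the Nova Scotia data.
        hours = np.arange(24 * 30)
        load = 500 + 80 * np.sin(2 * np.pi * hours / 24) \
                   + np.random.normal(0, 10, hours.size)

        # Level-3 discrete wavelet transform with Daubechies 2 (db2):
        # returns [cA3, cD3, cD2, cD1], i.e. one approximation series
        # and three detail series.
        cA3, cD3, cD2, cD1 = pywt.wavedec(load, 'db2', level=3)

        # Each coefficient series would then be fed to the PSO-optimized
        # ANFIS as a separate forecasting input.
        print(len(cA3), len(cD3), len(cD2), len(cD1))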

    Compact structure representation in discovering frequent patterns for association rules

    Frequent pattern mining is a key problem in important data mining applications, such as the discovery of association rules, strong rules, and episodes. The structures used in typical algorithms for solving this problem require several database scans and generate a large number of candidates. This paper presents a compact structure representation called Flex-tree for discovering frequent patterns for association rules. The Flex-tree structure is a lexicographic tree that finds frequent patterns using a depth-first search strategy. Efficient mining is achieved with one scan of the database, instead of the repeated database passes made by other methods, and the costly generation of large numbers of candidate sets is avoided, which dramatically reduces the search space
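
    The abstract gives no pseudocode, and the sketch below is not the Flex-tree algorithm itself; it is a hypothetical minimal illustration of the same general idea: a depth-first search over the lexicographic itemset tree, using a vertical layout built in a single database scan, so that no repeated scans or separate candidate-generation passes are needed:

        def lexicographic_dfs(transactions, min_support):
            # Single database scan: build a vertical representation
            # (item -> set of transaction ids).
            tidsets = {}
            for tid, items in enumerate(transactions):
                for item in items:
                    tidsets.setdefault(item, set()).add(tid)

            frequent = {}

            def dfs(prefix, prefix_tids, candidates):
                for i, item in enumerate(candidates):
                    tids = tidsets[item] if not prefix else prefix_tids & tidsets[item]
                    if len(tids) >= min_support:
                        itemset = prefix + (item,)
                        frequent[itemset] = len(tids)
                        # Extend only with lexicographically later items,
                        # depth first, pruning infrequent branches early.
                        dfs(itemset, tids, candidates[i + 1:])

            dfs((), None, sorted(tidsets))
            return frequent

        # Example: four transactions; itemsets with support >= 2.
        db = [{'a', 'b', 'c'}, {'a', 'c'}, {'a', 'd'}, {'b', 'c'}]
        print(lexicographic_dfs(db, min_support=2))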

    Large scale structure simulations of inhomogeneous LTB void models

    We perform numerical simulations of large-scale structure evolution in an inhomogeneous Lemaitre-Tolman-Bondi (LTB) model of the Universe. We follow the gravitational collapse of a large underdense region (a void) in an otherwise flat matter-dominated Einstein-de Sitter model. We observe how the (background) density contrast at the centre of the void grows to be of order one, and show that the density and velocity profiles follow the exact non-linear LTB solution to the full Einstein equations for all but the most extreme voids. This result seems to contradict previous claims that fully relativistic codes are needed to properly handle the non-linear evolution of large-scale structures, and that local Newtonian dynamics with an explicit expansion term is not adequate. We also find that the (local) matter density contrast grows with the scale factor in a way analogous to that of an open universe with a value of the matter density Omega_M(r) corresponding to the appropriate location within the void. Comment: 7 pages, 6 figures, published in Physical Review
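
    For context, the LTB solution against which the simulations are compared is described by the standard metric (textbook form, with free radial functions E(r) and M(r); the paper's notation may differ):

        ds^2 = -dt^2 + \frac{R'^2(r,t)}{1 + 2E(r)}\,dr^2 + R^2(r,t)\,d\Omega^2,
        \qquad \dot{R}^2 = \frac{2M(r)}{R} + 2E(r)

    where primes and dots denote derivatives with respect to r and t, respectively, and the homogeneous FLRW limit is recovered when R(r,t) = a(t) r.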

    Constraints and tensions in testing general relativity from Planck and CFHTLenS data including intrinsic alignment systematics

    We present constraints on testing general relativity (GR) at cosmological scales using recent data sets and assess the impact of galaxy intrinsic alignment in the CFHTLenS lensing data on those constraints. We consider data from Planck temperature anisotropies, the galaxy power spectrum from the WiggleZ survey, weak-lensing tomography shear-shear cross-correlations from the CFHTLenS survey, integrated Sachs-Wolfe (ISW)-galaxy cross-correlations, and baryon acoustic oscillation data. We use three different parametrizations of modified gravity (MG): one that is binned in redshift and scale, one that evolves monotonically in scale but is binned in redshift, and a functional parametrization that evolves only in redshift. We present the results in terms of the MG parameters Q and Sigma. We employ an intrinsic alignment model with an amplitude A_CFHTLenS that is included in the parameter analysis. We find an improvement in the constraints on the MG parameters corresponding to a 40-53% increase in the figure of merit compared to previous studies, and GR is found to be consistent with the data at the 95% confidence level. The bounds found on A_CFHTLenS are sensitive to the MG parametrization used, and the correlations between A_CFHTLenS and the MG parameters are found to be weak to moderate. For all three MG parametrizations, A_CFHTLenS is found to be consistent with zero when the whole lensing sample is used; however, when using the optimized early-type galaxy sample, a significantly nonzero A_CFHTLenS is found for GR and the scale-independent MG parametrization. We find that the tensions observed in previous studies persist, and there is an indication that cosmic microwave background (CMB) data and lensing data prefer different values for the MG parameters, particularly for the parameter Sigma. The analysis of the confidence contours and probability distributions suggests that the bimodality found follows that of the known tension in the sigma_8 parameter
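
    The MG parameters Q and Sigma are conventionally defined through modified Poisson and lensing equations for the metric potentials phi and psi (a sketch of the widely used convention; signs and normalizations vary between papers):

        k^2\phi = -4\pi G a^2 \rho \Delta\, Q(k,a),
        \qquad k^2(\phi + \psi) = -8\pi G a^2 \rho \Delta\, \Sigma(k,a)

    Q rescales the clustering source felt by non-relativistic matter, Sigma rescales the deflection of light, and GR is recovered when Q = Sigma = 1.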

    Calibration of ZMPT101B voltage sensor module using polynomial regression for accurate load monitoring

    Smart electricity is developing quickly as a result of advancements in sensor technology. The accuracy of a sensing device is the backbone of every measurement, and the fundamentals of every electrical quantity measurement are voltage and current sensing. Sensor calibration, in the context of this research, means the marking or scaling of the voltage sensor so that it can present an accurate sampled voltage from the ADC output using an appropriate algorithm. The peak-to-peak input voltage to the sensor (measured with a standard FLUKE 115 meter) is correlated with the peak-to-peak ADC output of the sensor using first- to fifth-order polynomial regression, in order to determine the best-fitting relationship between them. An Arduino microcontroller is used to receive the ADC conversion and is also programmed to calculate the root mean square value of the supply voltage. The analysis of the polynomials shows that the third-order polynomial gives the best relationship between the analog input and the ADC output. The accuracy of the algorithm is tested by measuring the root mean square values of the supply voltage using both instantaneous voltage calculation and peak-to-peak voltage methods. The error in the measurement is less than 1% with the peak-to-peak method and less than 2.5% with the instantaneous method for voltage measurements above 50 V AC, which is very good for utility measurements. Therefore, the proposed calibration method will facilitate more accurate voltage and power computation for researchers and designers, especially in load monitoring where the applied voltage is in the 240 V or 120 V range
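
    A minimal sketch of the calibration fit (assuming NumPy and invented calibration pairs; the actual measured values are in the paper):

        import numpy as np

        # Hypothetical calibration pairs: reference peak-to-peak voltage
        # from the FLUKE 115 versus raw peak-to-peak ADC counts from the
        # ZMPT101B module. Values are illustrative only.
        v_ref = np.array([20.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
        adc_pp = np.array([85.0, 210.0, 418.0, 640.0, 855.0, 1060.0, 1280.0])
        x = adc_pp / 1000.0  # scale counts so high-order fits stay well-conditioned

        # Fit first- to fifth-order polynomials mapping ADC counts -> volts
        # and report the RMS residual of each fit on the calibration points.
        for order in range(1, 6):
            coeffs = np.polyfit(x, v_ref, order)
            rms = np.sqrt(np.mean((np.polyval(coeffs, x) - v_ref) ** 2))
            print(f"order {order}: RMS residual = {rms:.3f} V")

        # The paper selects the third-order fit; on real data that choice
        # should be checked on held-out readings, since higher orders can
        # overfit. For a sine wave, the RMS voltage then follows from a
        # calibrated peak-to-peak reading as V_rms = V_pp / (2 * sqrt(2)).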

    A Novel Performance Metric for Building an Optimized Classifier

    Problem statement: Typically, the accuracy metric is applied when optimizing heuristic or stochastic classification models. However, the use of the accuracy metric might lead the search process to sub-optimal solutions, because its values are less discriminating and it is not robust to changes in the class distribution. Approach: To address these detrimental effects, we propose a novel performance metric that combines the beneficial properties of the accuracy metric with the extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naïve stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than one trained with the conventional accuracy metric. Additionally, a t-test analysis shows a clear advantage of the MCS model trained with the OARP metric over the one trained with the accuracy metric for all binary data sets. Conclusion: The experiments show that the OARP metric leads stochastic classifiers such as the MCS towards a better training model, which in turn improves the predictive results of any heuristic or stochastic classification model
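
    A small sketch of the motivation (this is not the paper's OARP formula, which is not reproduced here): two classifiers can have identical accuracy while differing sharply in recall, so accuracy alone cannot rank them:

        def metrics(tp, fp, fn, tn):
            # Standard confusion-matrix metrics for a binary classifier.
            acc = (tp + tn) / (tp + fp + fn + tn)
            recall = tp / (tp + fn)
            precision = tp / (tp + fp)
            return acc, recall, precision

        # 100 samples, 10 of them positive (imbalanced classes).
        clf_a = metrics(tp=8, fp=8, fn=2, tn=82)  # catches most positives
        clf_b = metrics(tp=2, fp=2, fn=8, tn=88)  # misses most positives

        for name, (acc, rec, prec) in [("A", clf_a), ("B", clf_b)]:
            print(f"clf {name}: accuracy={acc:.2f} "
                  f"recall={rec:.2f} precision={prec:.2f}")
        # Both print accuracy=0.90, yet recall is 0.80 vs 0.20: a metric
        # that folds in recall and precision can separate them.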

    Recent approaches and applications of non-intrusive load monitoring

    Appliance load monitoring is vital in every energy-consuming system, be it commercial, residential, or industrial in nature. Traditional load monitoring systems are intrusive in nature and require the installation of sensors on every load of interest, which makes them costly, time-consuming, and complex. A non-intrusive load monitoring (NILM) system uses the aggregated measurement at the utility service entry to identify and disaggregate the appliances connected in the building, which means only one set of sensors is required and no entry into the consumer premises is needed. This paper presents a comprehensive review of the state of the art of NILM and the different methods applied by researchers so far, before concluding with future research directions, including automatic home energy saving using NILM. The study also found that more effort is needed from researchers to apply NILM to appliance energy management, for example in a Home Energy Management System (HEMS)
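
    As one illustration of the event-based family of NILM methods covered by such reviews (a hypothetical minimal sketch; the signatures, threshold, and matching rule are invented for the example):

        import numpy as np

        # Hypothetical appliance signatures: nominal power step in watts.
        signatures = {"kettle": 1800.0, "fridge compressor": 120.0, "TV": 90.0}

        def detect_events(aggregate, threshold=50.0):
            # Flag sample indices where aggregate power jumps by more
            # than the threshold (an on/off switching event).
            steps = np.diff(aggregate)
            return [(i + 1, s) for i, s in enumerate(steps) if abs(s) > threshold]

        def label_event(step, tolerance=0.15):
            # Match a power step to the closest signature, within 15%.
            name, nominal = min(signatures.items(),
                                key=lambda kv: abs(abs(step) - kv[1]))
            if abs(abs(step) - nominal) <= tolerance * nominal:
                return f"{name} {'on' if step > 0 else 'off'}"
            return "unknown"

        # Synthetic aggregate trace: fridge cycles on, kettle on, kettle off.
        power = np.array([200, 200, 320, 320, 2120, 2120, 2120, 320, 320])
        for idx, step in detect_events(power):
            print(idx, f"{step:+.0f} W ->", label_event(step))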